Results 1 - 20 of 25
1.
Auditory Perception & Cognition ; : No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2323632

ABSTRACT

Restrictions on face-to-face interactions due to the outbreak of the coronavirus pandemic (COVID-19) in early 2020 have impacted experimental behavioral research. The rapid change from in-person to online data collection has been challenging for many behavioral studies, especially those that require vocal production, and the quality of remotely collected data needs to be investigated. The current study examines recording quality and corresponding measures of vocal production accuracy in online and in-person settings using two measurements: harmonic-to-noise ratio (HNR) and fundamental frequency (f0). Participants imitated pitch patterns extracted from recordings of song or speech, either in a laboratory or via an online platform. The results showed that the recordings from the online setting had higher HNR than those from the in-person setting, whereas pitch imitation accuracy did not differ between settings. We also report an experiment that simulated differences between the online and in-person settings within participants, focusing on the software used, type of microphone, and presence of ambient noise. Pitch accuracy did not differ according to these variables, except ambient noise, whereas HNR again varied across conditions. Based on these results, we conclude that measures of pitch accuracy are reliable across these different types of data collection, whereas finer-grained spectral measures like HNR may be affected by various factors. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
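For readers who want to reproduce the two measures named in this abstract on their own recordings, the sketch below shows one common way to extract f0 and HNR in Python via the parselmouth interface to Praat; the file name and analysis settings are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: extract fundamental frequency (f0) and harmonic-to-noise ratio (HNR)
# from a recording with parselmouth (a Python wrapper around Praat).
# The file name is a placeholder; Praat's default analysis settings are used.
import numpy as np
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("imitation_trial.wav")   # hypothetical recording

# f0 contour from Praat's default pitch tracker (75-600 Hz range by default)
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0_voiced = f0[f0 > 0]                            # drop unvoiced frames
print(f"median f0: {np.median(f0_voiced):.1f} Hz")

# mean HNR from Praat's cross-correlation harmonicity analysis
harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
hnr = call(harmonicity, "Get mean", 0, 0)
print(f"mean HNR: {hnr:.1f} dB")
```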

2.
Am J Otolaryngol ; 44(5): 103929, 2023 May 21.
Article in English | MEDLINE | ID: covidwho-2326168

ABSTRACT

BACKGROUND: The mask mandate during the COVID-19 pandemic led to communication challenges, as face masks reduce sound energy and eliminate visual cues. This study examines the impact of a face mask on sound energy and compares speech recognition performance between a basic and a premium hearing aid. METHODS: Participants watched four video clips (a female and a male speaker, each with and without a face mask) and repeated the target sentences in various test conditions. Real-ear measurement was performed to investigate the changes in sound energy in the no-mask, surgical-mask, and N95-mask conditions. RESULTS: With the face mask on, sound energy significantly decreased for all mask types. For speech recognition, the premium hearing aid showed significant improvement in the mask condition. CONCLUSION: The findings encourage health care professionals to actively use communication strategies, such as speaking slowly and reducing background noise, when interacting with individuals with hearing loss.

3.
Atten Percept Psychophys ; 2023 May 15.
Article in English | MEDLINE | ID: covidwho-2314216

ABSTRACT

Previous research demonstrates that listeners dynamically adjust phonetic categories in line with lexical context. While listeners show flexibility in adapting speech categories, recalibration may be constrained when variability can be attributed to an external source. It has been hypothesized that when listeners attribute atypical speech input to a causal factor, phonetic recalibration is attenuated. The current study investigated this theory directly by examining the influence of face masks, an external factor that affects both visual and articulatory cues, on the magnitude of phonetic recalibration. Across four experiments, listeners completed a lexical decision exposure phase in which they heard an ambiguous sound in either /s/-biasing or /ʃ/-biasing lexical contexts, while simultaneously viewing a speaker with a mask off, a mask on the chin, or a mask over the mouth. Following exposure, all listeners completed an auditory phonetic categorization test along an /ʃ/-/s/ continuum. In Experiment 1 (no face mask present during exposure trials), Experiment 2 (face mask on the chin), Experiment 3 (face mask over the mouth during ambiguous items), and Experiment 4 (face mask over the mouth during the entire exposure phase), listeners showed a robust and equivalent phonetic recalibration effect. Recalibration manifested as a greater proportion of /s/ responses for listeners in the /s/-biased exposure group relative to listeners in the /ʃ/-biased exposure group. The results support the notion that listeners do not causally attribute speech idiosyncrasies to face masks, which may reflect a general speech learning adjustment during the COVID-19 pandemic.

4.
Dissertation Abstracts International: Section B: The Sciences and Engineering ; 84(6-B):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2301457

ABSTRACT

Interacting with computer systems by speech is more natural than conventional interaction methods. It is also more accessible, since it does not require precise selection of small targets or rely entirely on visual elements like virtual keys and buttons. Speech also enables contactless interaction, which is of particular interest when touching public devices is to be avoided, as in the recent COVID-19 pandemic. However, speech is unreliable in noisy places and can compromise users' privacy and security in public. Image-based silent speech, which primarily converts tongue and lip movements into text, can mitigate many of these challenges. Since it does not rely on acoustic features, users can silently speak without vocalizing the words. It has also been demonstrated as a promising input method on mobile devices and has been explored for a variety of audiences and contexts where the acoustic signal is unavailable (e.g., people with speech disorders) or unreliable (e.g., noisy environments). Though the method shows promise, very little is known about people's perceptions of using it, their anticipated performance with silent speech input, and their approach to avoiding potential misrecognition errors. In addition, existing silent speech recognition models are slow and error-prone, or use stationary, external devices that are not scalable. In this dissertation, we attempt to address these issues. Towards this, we first conduct a user study to explore users' attitudes towards silent speech with a particular focus on social acceptance. Results show that people perceive silent speech as more socially acceptable than speech input but are concerned about input recognition, privacy, and security issues. We then conduct a second study examining users' error tolerance with speech and silent speech input methods. Results reveal that users are willing to tolerate more errors with silent speech input than with speech input, as it offers a higher degree of privacy and security. We conduct another study to identify a suitable method for providing real-time feedback on silent speech input. Results show that users find one of the evaluated feedback methods effective and significantly more private and secure than a commonly used video feedback method. In light of these findings, which establish silent speech as an acceptable and desirable mode of interaction, we take a step forward to address the technological limitations of existing image-based silent speech recognition models to make them more usable and reliable on computer systems. Towards this, we first develop LipType, an optimized version of LipNet for improved speed and accuracy. We then develop an independent repair model that processes video input for poor lighting conditions, when applicable, and corrects potential errors in the output for increased accuracy. We then test this model with LipType and other speech and silent speech recognizers to demonstrate its effectiveness. In an evaluation, the model reduced word error rate by 57% compared to the state of the art without compromising overall computation time. However, we identify that the model is still susceptible to failure due to the variability of user characteristics. A person's speaking rate, for instance, is a fundamental user characteristic that can influence speech recognition performance due to the variation in acoustic properties of human speech production. We formally investigate the effects of speaking rate on silent speech recognition.
Results revealed that native users speak about 8% faster than non-native users, but both groups slow down at comparable rates (34-40%) when interacting with silent speech, mostly to increase recognition accuracy. A follow-up experiment confirms that slowing down does improve the accuracy of silent speech recognition. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
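The word error rate reported above is the standard word-level edit distance normalized by the length of the reference transcript. A minimal, generic implementation (not the author's code) is sketched below.

```python
# Sketch: word error rate (WER) = (substitutions + deletions + insertions) / N,
# computed as a word-level Levenshtein distance divided by reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("set the alarm for six", "set alarm for sticks"))  # 0.4
```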

5.
Dissertation Abstracts International Section A: Humanities and Social Sciences ; 84(2-A):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2271879

ABSTRACT

The purpose of this qualitative descriptive case study was to examine the lack of equitable educational opportunities for English learners and school administrators' perceptions of their role in addressing inequities in schools that serve English learners. This descriptive case study examined contextualized phenomena within the specific boundaries of the urban setting. Descriptive case study data were collected through semi-structured interviews, focus group interviews, and a review of artifacts and documents. The interview data were coded using the keywords-in-context method. By analyzing patterns in the urban school leaders' speech, I gained insight into their perception of their role in addressing inequities. The study results answered the research questions, filled a gap in the literature, and contributed to the growing body of scholarly work. The following research questions provided a basis for identifying school administrators' perceptions about addressing educational inequities in schools and their suggestions for other school leaders to interpret and implement English learner policy: What are school administrators' perceptions of their role in addressing inequities in access to and quality of education for English learners? What suggestions do school administrators have to address inequities in access to and quality of education for English learners? What are school administrators' perceptions of how the COVID-19 pandemic exacerbated educational inequities for English learners? The idea of language as a mediator of thought guided the descriptions of the language used by the 10 participants and the themes that emerged. The results are consistent with existing research related to sociocultural theory. The focus group discussions, individual interviews, and documents reviewed were used to triangulate the data and confirm the findings. The study findings included the administrators' prior experiences, sociocultural context, and speech, indicating their perception of their role in addressing educational inequities. (PsycInfo Database Record (c) 2022 APA, all rights reserved)

6.
Dissertation Abstracts International: Section B: The Sciences and Engineering ; 84(1-B):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2259984

ABSTRACT

Visual speech information, especially that provided by the mouth and lips, is important during face-to-face communication. This has been made more evident by the increased difficulty of speech perception now that mask usage has become commonplace in response to the COVID-19 pandemic. Masking obscures the mouth and lips, thus eliminating meaningful information from visual cues that are used to perceive speech correctly. To fully understand the perceptual benefits afforded by visual information during audiovisual speech perception, it is necessary to explore the underlying neural mechanisms involved. While several studies have shown neural activation of auditory regions in response to visual speech, the information represented by these activations remains poorly understood. The objective of this dissertation is to investigate the neural bases for how visual speech modulates the temporal, spatial, and spectral components of audiovisual speech perception, and the type of information encoded by these signals. Most studies approach this question by using techniques sensitive to one or two important dimensions (temporal, spatial, or spectral). Even in studies that have used intracranial electroencephalography (iEEG), which is sensitive to all three dimensions, research conventionally quantifies effects using single-subject statistics, leaving group-level variance unexplained. In Study 1, I overcome these shortcomings by investigating how vision modulates auditory speech processes across spatial, temporal, and spectral dimensions in a large group of epilepsy patients with intracranial electrodes implanted (n = 21). The results of this study demonstrate that visual speech produced multiple spatiotemporally distinct patterns of theta, beta, and high-gamma power changes in auditory regions in the superior temporal gyrus (STG). While Study 1 showed that visual speech evoked activity in auditory areas, it is not clear what, if any, information is encoded by these activations. In Study 2, I investigated whether these distinct patterns of activity in the STG, produced by visual speech, contain information about what word is being said. To address this question, I utilized a support-vector machine classifier to decode the identities of four word types (consonants beginning with 'b', 'd', 'g', and 'f') from activity in the STG recorded during spoken (phonemes: basic units of speech) or silent visual speech (visemes: basic units of lipreading information). Results from this study indicated that visual speech indeed encodes lipreading information in auditory regions. Studies 1 and 2 provided evidence from iEEG data obtained from patients with epilepsy. In order to replicate these results in a normative population and to leverage improved spatial resolution, in Study 3 I acquired data from a large cohort of normative subjects (n = 64) during a randomized event-related functional magnetic resonance imaging (fMRI) experiment. Similar to Study 2, I used machine learning to test for classification of phonemes and visemes (/fafa/, /kaka/, /mama/) from auditory, auditory-visual, and visual regions in the brain. Results conceptually replicated those of Study 2, such that phoneme and viseme identities could both be classified from the STG, revealing that this information is encoded through distributed representations.
Further analyses revealed similar spatial patterns in the STG between phonemes and visemes, consistent with the model that viseme information is used to target corresponding phoneme populations in auditory regions. Taken together, the findings from this dissertation advance our understanding of the neural mechanisms that underlie the multiple ways in which vision alters the temporal, spatial and spectral components of audiovisual speech perception. (PsycInfo Database Record (c) 2022 APA, all rights reserved)

7.
51st International Congress and Exposition on Noise Control Engineering, Internoise 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2259964

ABSTRACT

Wearing face masks (alongside physical distancing) provides some protection against COVID-19. Face masks can also change how people communicate and subsequently affect speech signal quality. This study investigated how two common face mask types affect the acoustic characteristics and perception of speech. Quantitative and qualitative assessments were carried out by measuring sound pressure levels and by playing recordings back to a group of listeners. The responses showed that masks alter the speech signal, with downstream effects on a speaker's intelligibility. Masks muffle speech sounds at higher frequencies; hence, the acoustic effect of a speaker wearing a face mask is equivalent to the listener having a slight high-frequency hearing loss. When asked about audibility, over 83% of the participants were able to clearly hear the no-mask audio clip, whereas 41% of the participants rated the clips as only moderately audible with N95 and face-shield masks. By removing visual access, face masks also act as communication barriers, with 50% of the participants finding it difficult to understand people because they could not read their lips. Based on these findings, it is reasonable to hypothesize that wearing a mask would attenuate speech spectra in similar frequency bands. © 2022 Internoise 2022 - 51st International Congress and Exposition on Noise Control Engineering. All rights reserved.

8.
Journal of the American Academy of Audiology ; 33(2):98-104, 2022.
Article in English | APA PsycInfo | ID: covidwho-2284227

ABSTRACT

Background: The COVID-19 pandemic has made wearing face masks a common habit in public places. Several reports have underlined the increased difficulties encountered by deaf people in speech comprehension, resulting in a higher risk of social isolation and psychological distress. Purpose: To assess the detrimental effect of different types of face masks on speech perception, according to listener hearing level and background noise. Research design: Quasi-experimental cross-sectional study. Study sample: Thirty patients were assessed: 16 with normal hearing (NH) and 14 hearing-impaired (HI) with moderate hearing loss. Data collection and analysis: A speech perception test (TAUV) was administered by an operator trained to speak at 65 dB, without a face mask, with a surgical mask, and with a KN95/FFP2 face mask, in a quiet and in a noisy environment (cocktail-party noise, 55 dB). The Hearing Handicap Index for Adults (HHI-A) was administered twice, asking subjects to complete it for the periods before and after the pandemic outbreak. A two-way repeated-measures analysis of variance was performed. Results: The NH group showed a significant difference between the no-mask and the KN95/FFP2-mask condition in noise (p = 0.01). The HI group showed significant differences for the surgical or KN95/FFP2 mask compared with no mask, and for the KN95/FFP2 compared with the surgical mask, in quiet and in noise (p < 0.001). An increase in HHI-A scores was recorded for the HI patients (p < 0.001). Conclusion: Face masks have a detrimental effect on speech perception, especially for HI patients, potentially worsening their hearing-related quality of life. (PsycInfo Database Record (c) 2023 APA, all rights reserved)

9.
BMC Public Health ; 23(1): 652, 2023 04 05.
Article in English | MEDLINE | ID: covidwho-2262014

ABSTRACT

BACKGROUND: COVID-19 measures, such as face masks, have clear consequences for the communicative accessibility of people with hearing impairment because they reduce speech perception. As communication is essential for participating in society, this might have an impact on their mental well-being. This study set out to investigate the impact of the COVID-19 measures on the communicative accessibility and well-being of adults with hearing impairment. METHOD: Two groups of adults took part in this study, with (N = 150) and without (N = 50) hearing loss. The participants answered statements on a five-point Likert scale. Statements regarding communicative accessibility covered speech perception abilities, behavioral changes, and access to information. Well-being was measured at the overall level in daily community life and at work, and in particular with respect to perceived stress. We also asked participants with hearing impairment about their audiological needs during the pandemic. RESULTS: Significant group differences were found in speech perception abilities due to COVID-19 measures. Behavioral changes were observed to compensate for the loss in speech perception. Hearing loss was associated with an increased request for repetition or for removal of the face mask. Using information technology (e.g., Zoom) or contacting colleagues did not pose any major problems for the hearing group, whereas participants with hearing loss gave mixed responses. A significant difference emerged between groups on well-being in daily life, but not on well-being at work or perceived stress. CONCLUSIONS: This study shows the detrimental effect of COVID-19 measures on the communicative accessibility of individuals with hearing loss. It also shows their resilience, as only partial group differences were found on well-being. Protective factors are indicated, such as access to information and audiological care.


Subject(s)
COVID-19, Deafness, Hearing Loss, Humans, Adult, Communication, Hearing
10.
Theory and Practice in Language Studies ; 13(1):78-88, 2023.
Article in English | ProQuest Central | ID: covidwho-2204364

ABSTRACT

[...] eight adult learners were interviewed to give their perceptions of this newly designed module. [...] Language learners of English (ESL) often faced difficulties with speaking skills and thus could not perform well in speaking assessments. A. Underpinning Theory: The Speaking Assessment Module was developed for adult learners to experience learning remotely via the online distance learning (ODL) approach. Since the module is designed for adult learners, it is equipped with teaching and learning materials based on andragogy learning theory. [...] Online distance learning (ODL) has become a learning option for Malaysians.

11.
Telkomnika ; 21(1):159-167, 2023.
Article in English | Academic Search Complete | ID: covidwho-2164258

ABSTRACT

Human-computer interaction benefits greatly from emotion recognition from speech. To promote a contact-free environment during the coronavirus disease 2019 (COVID-19) pandemic, most digitally based systems used speech-based devices. Consequently, emotion detection from speech has many beneficial applications, including in pathology. The vast majority of speech emotion recognition (SER) systems are built on machine learning or deep learning models and therefore need greater computing power and have higher resource requirements. This issue has traditionally been addressed by developing feature selection algorithms. Recent research has shown that nature-inspired or evolutionary algorithms such as equilibrium optimization (EO)- and cuckoo search (CS)-based meta-heuristic approaches are superior to traditional feature selection (FS) models in terms of recognition performance. The purpose of this study is to investigate the impact of meta-heuristic feature selection approaches on emotion recognition from speech. To achieve this, we selected the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and obtained maximum recognition accuracies of 89.64% using the EO algorithm and 92.71% using the CS algorithm. As a final step, we report the associated precision and F1 score for each of the emotion classes. [FROM AUTHOR]
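The study couples meta-heuristic feature selection with a classifier; as a rough illustration of that wrapper setup, the sketch below scores random binary feature masks over MFCC statistics with a support-vector classifier. The random search merely stands in for the EO and CS optimizers, and the file layout, label parsing, and parameter values are assumptions rather than details from the paper.

```python
# Sketch: wrapper-style feature selection for speech emotion recognition
# on RAVDESS. A random search stands in for EO/CS; paths are illustrative.
import glob
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mfcc_stats(path, n_mfcc=40):
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])   # 80-dim vector

files = sorted(glob.glob("RAVDESS/Actor_*/*.wav"))            # assumed layout
X = np.vstack([mfcc_stats(f) for f in files])
# RAVDESS encodes the emotion class as the 3rd dash-separated field of the name
labels = np.array([int(f.split("/")[-1].split("-")[2]) for f in files])

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf", C=10, gamma="scale")
    return cross_val_score(clf, X[:, mask], labels, cv=5).mean()

rng = np.random.default_rng(0)
best_mask, best_acc = None, 0.0
for _ in range(50):                    # an EO/CS optimizer would guide this loop
    mask = rng.random(X.shape[1]) < 0.5
    acc = fitness(mask)
    if acc > best_acc:
        best_mask, best_acc = mask, acc
print(f"selected {best_mask.sum()} features, CV accuracy {best_acc:.3f}")
```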

12.
Dissertation Abstracts International Section A: Humanities and Social Sciences ; 84(2-A):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2147490

ABSTRACT

The purpose of this qualitative descriptive case study was to examine the lack of equitable educational opportunities for English learners and school administrators' perceptions of their role in addressing inequities in schools that serve English learners. This descriptive case study examined contextualized phenomena within the specific boundaries of the urban setting. Descriptive case study data were collected through semi-structured interviews, focus group interviews, and a review of artifacts and documents. The interview data were coded using the keywords-in-context method. By analyzing patterns in the urban school leaders' speech, I gained insight into their perception of their role in addressing inequities. The study results answered the research questions, filled a gap in the literature, and contributed to the growing body of scholarly work. The following research questions provided a basis for identifying school administrators' perceptions about addressing educational inequities in schools and their suggestions for other school leaders to interpret and implement English learner policy: What are school administrators' perceptions of their role in addressing inequities in access to and quality of education for English learners? What suggestions do school administrators have to address inequities in access to and quality of education for English learners? What are school administrators' perceptions of how the COVID-19 pandemic exacerbated educational inequities for English learners? The idea of language as a mediator of thought guided the descriptions of the language used by the 10 participants and the themes that emerged. The results are consistent with existing research related to sociocultural theory. The focus group discussions, individual interviews, and documents reviewed were used to triangulate the data and confirm the findings. The study findings included the administrators' prior experiences, sociocultural context, and speech, indicating their perception of their role in addressing educational inequities. (PsycInfo Database Record (c) 2022 APA, all rights reserved)

13.
Dissertation Abstracts International: Section B: The Sciences and Engineering ; 84(1-B):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2125012

ABSTRACT

Visual speech information, especially that provided by the mouth and lips, is important during face-to-face communication. This has been made more evident by the increased difficulty of speech perception now that mask usage has become commonplace in response to the COVID-19 pandemic. Masking obscures the mouth and lips, thus eliminating meaningful information from visual cues that are used to perceive speech correctly. To fully understand the perceptual benefits afforded by visual information during audiovisual speech perception, it is necessary to explore the underlying neural mechanisms involved. While several studies have shown neural activation of auditory regions in response to visual speech, the information represented by these activations remains poorly understood. The objective of this dissertation is to investigate the neural bases for how visual speech modulates the temporal, spatial, and spectral components of audiovisual speech perception, and the type of information encoded by these signals. Most studies approach this question by using techniques sensitive to one or two important dimensions (temporal, spatial, or spectral). Even in studies that have used intracranial electroencephalography (iEEG), which is sensitive to all three dimensions, research conventionally quantifies effects using single-subject statistics, leaving group-level variance unexplained. In Study 1, I overcome these shortcomings by investigating how vision modulates auditory speech processes across spatial, temporal, and spectral dimensions in a large group of epilepsy patients with intracranial electrodes implanted (n = 21). The results of this study demonstrate that visual speech produced multiple spatiotemporally distinct patterns of theta, beta, and high-gamma power changes in auditory regions in the superior temporal gyrus (STG). While Study 1 showed that visual speech evoked activity in auditory areas, it is not clear what, if any, information is encoded by these activations. In Study 2, I investigated whether these distinct patterns of activity in the STG, produced by visual speech, contain information about what word is being said. To address this question, I utilized a support-vector machine classifier to decode the identities of four word types (consonants beginning with 'b', 'd', 'g', and 'f') from activity in the STG recorded during spoken (phonemes: basic units of speech) or silent visual speech (visemes: basic units of lipreading information). Results from this study indicated that visual speech indeed encodes lipreading information in auditory regions. Studies 1 and 2 provided evidence from iEEG data obtained from patients with epilepsy. In order to replicate these results in a normative population and to leverage improved spatial resolution, in Study 3 I acquired data from a large cohort of normative subjects (n = 64) during a randomized event-related functional magnetic resonance imaging (fMRI) experiment. Similar to Study 2, I used machine learning to test for classification of phonemes and visemes (/fafa/, /kaka/, /mama/) from auditory, auditory-visual, and visual regions in the brain. Results conceptually replicated those of Study 2, such that phoneme and viseme identities could both be classified from the STG, revealing that this information is encoded through distributed representations.
Further analyses revealed similar spatial patterns in the STG between phonemes and visemes, consistent with the model that viseme information is used to target corresponding phoneme populations in auditory regions. Taken together, the findings from this dissertation advance our understanding of the neural mechanisms that underlie the multiple ways in which vision alters the temporal, spatial and spectral components of audiovisual speech perception. (PsycInfo Database Record (c) 2022 APA, all rights reserved)

14.
Otolaryngology - Head and Neck Surgery ; 167(1 Supplement):P100-P101, 2022.
Article in English | EMBASE | ID: covidwho-2064483

ABSTRACT

Introduction: To evaluate and validate the real-life use of a remote check application that enables cochlear implant (CI) recipients or parents/caregivers to monitor progress at home and helps clinicians determine and plan clinical visits based on their needs. Method(s): A total of 110 implanted patients (age range: 6-77 years; at least 12 months of implant experience and familiarity with vocabulary for the digits 0 to 9) were included in this study, in which each subject served as their own control. The test battery includes an implant-site photograph, impedance measurements, datalogs, questionnaires, speech perception, and aided threshold tests. A chi-square test was used for statistical analysis of the results obtained at home versus in the clinical setting. Result(s): In all but 2 cases (108/110, 98%), the test battery reached the same conclusion as the clinician in determining whether the recipient required any clinical action. Of recipients and parents/caregivers, 90% (100/110) reported being "satisfied" or "very satisfied" if their clinic visits were based on results from the self-administered remote test battery (P < .001). Reasons for satisfaction included the convenience of remote monitoring, the ability to request an appointment if needed, and the continued involvement of their clinician. Satisfaction ratings with the remote monitoring concept were moderately to strongly correlated with perceived improvement in convenience and time involved. Conclusion(s): Most respondents recognized that the remote check battery has the potential to save time, reduce costs, and increase the convenience of aftercare. With the remote check battery, clinicians remain adequately informed regarding patient management, appointment scheduling, and required clinical actions. This may further support global case management during the COVID-19 pandemic, when social distancing is recommended.

15.
Healthcare (Basel) ; 10(9)2022 Sep 07.
Article in English | MEDLINE | ID: covidwho-2010014

ABSTRACT

Face masks have been mandatory during the COVID-19 pandemic, leading to attenuation of sound energy and loss of visual cues that are important for communication. This study explores how a face mask affects speech performance for individuals with and without hearing loss. Four video recordings (a female speaker with and without a face mask and a male speaker with and without a face mask) were used to examine individuals' speech performance. The participants completed a listen-and-repeat task while watching the four types of video recordings. Acoustic characteristics of speech signals by mask type (no mask, surgical, and N95) were also examined. The availability of visual cues was beneficial for speech understanding: both groups showed significant improvements in speech perception when they were able to see the speaker without the mask. However, when the speakers were wearing a mask, no statistically significant difference was observed between the no-visual-cues and visual-cues conditions. The findings demonstrate that the provision of visual cues is beneficial for speech perception for individuals with normal hearing and with hearing impairment. This study underscores the importance of using communication strategies during the pandemic, when visual information is lost due to face masks.

16.
Appl Acoust ; 197: 108940, 2022 Aug.
Article in English | MEDLINE | ID: covidwho-1956084

ABSTRACT

With the COVID-19 pandemic, the usage of personal protective equipment (PPE) has become 'the new normal'. Both surgical masks and N95 masks with a face shield are widely used in healthcare settings to reduce virus transmission, but the use of these masks has a negative impact on speech perception. Transparent masks are therefore recommended to solve this dilemma. However, there is a lack of quantitative studies regarding the effect of PPE on speech perception. This study aims to compare the effect on speech perception of different types of PPE (surgical masks, N95 masks with face shield, and transparent masks) in healthcare settings, for listeners with normal hearing in the audiovisual or auditory-only modality. Bamford-Kowal-Bench (BKB)-like Mandarin speech stimuli were digitally recorded by a G.R.A.S. KEMAR manikin without and with masks (surgical masks, N95 masks with face shield, and transparent masks). Two variants of video display were created (with or without visual cues) and tagged to the corresponding audio recordings. The speech recording and video were presented to listeners simultaneously in each of four conditions: unattenuated speech with visual cues (no mask); surgical-mask-attenuated speech without visual cues; N95-mask-with-face-shield-attenuated speech without visual cues; and transparent-mask-attenuated speech with visual cues. The signal-to-noise ratio yielding 50% correct scores (SNR50 threshold) was measured for each condition in the presence of four-talker babble. Twenty-four subjects completed the experiment. Acoustic spectra obtained from all types of masks were primarily attenuated at high frequencies, beyond 3 kHz, but to different extents. The mean SNR50 thresholds of the two auditory-only conditions (surgical mask and N95 mask with face shield) were higher than those of the audiovisual conditions (no mask and transparent mask). SNR50 thresholds in the surgical-mask condition were significantly lower than those for the N95 mask with face shield. No significant difference was observed between the two audiovisual conditions. The results confirm that wearing a surgical mask or an N95 mask with face shield has a negative impact on speech perception. However, wearing a transparent mask improved speech perception to a level similar to the unmasked condition for young normal-hearing listeners.
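Measuring an SNR50 threshold presupposes mixing the target speech with four-talker babble at controlled signal-to-noise ratios. A generic way to scale a noise track to a desired SNR (not the authors' exact procedure) is sketched below.

```python
# Sketch: scale a babble-noise track so the speech-to-noise ratio (in dB)
# matches a target SNR, as needed when estimating SNR50 thresholds.
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    babble = babble[: len(speech)]                   # trim noise to speech length
    p_speech = np.mean(speech ** 2)
    p_babble = np.mean(babble ** 2)
    # gain chosen so that 10*log10(p_speech / (gain**2 * p_babble)) == snr_db
    gain = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10)))
    mix = speech + gain * babble
    return mix / np.max(np.abs(mix))                 # normalize to avoid clipping

# usage: sweep SNRs around the expected threshold, e.g.
# mixed = mix_at_snr(speech_wave, babble_wave, snr_db=-4.0)
```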

17.
Front Psychol ; 13: 832100, 2022.
Article in English | MEDLINE | ID: covidwho-1952594

ABSTRACT

Older adults with age-related hearing loss often use hearing aids (HAs) to compensate. However, certain challenges in speech perception, especially in noise, still exist despite today's HA technology. The current study presents an evaluation of a home-based auditory exercise program that can be used during the adaptation process for HA use. The home-based program was developed at a time when telemedicine became prominent, in part due to the COVID-19 pandemic. The study included 53 older adults with age-related symmetrical sensorineural hearing loss. They were divided into three groups depending on their experience using HAs. Group 1: experienced users (participants who had used bilateral HAs for at least 2 years). Group 2: new users (participants who were fitted with bilateral HAs for the first time). Group 3: non-users. These three groups underwent auditory exercises for 3 weeks. The auditory tasks included auditory detection, auditory discrimination, and auditory identification, as well as comprehension, with basic (syllables) and more complex (sentences) stimuli, presented in quiet and in noisy listening conditions. All participants completed self-assessment questionnaires before and after the auditory exercise program and underwent a cognitive test at the end. Self-assessed improvements in hearing ability were observed across the HA user groups, with significant changes described by new users. Overall, speech perception in noise was poorer than in quiet. Speech perception accuracy was poorer in the non-user group than in the user groups in all tasks. In sessions where stimuli were presented in quiet, similar performance was observed among new and experienced users. New users performed significantly better than non-users in all speech-in-noise tasks; however, compared to the experienced users, performance differences depended on task difficulty. The findings indicate that HA users, even new users, had better perceptual performance than their peers who did not receive hearing aids.

18.
Cognitive Research ; 7(1), 2022.
Article in English | ProQuest Central | ID: covidwho-1863875

ABSTRACT

Over the past two years, face masks have been a critical tool for preventing the spread of COVID-19. While previous studies have examined the effects of masks on speech recognition, much of this work was conducted early in the pandemic. Given that human listeners are able to adapt to a wide variety of novel contexts in speech perception, an open question concerns the extent to which listeners have adapted to masked speech during the pandemic. In order to evaluate this, we replicated Toscano and Toscano (PLOS ONE 16(2):e0246842, 2021), looking at the effects of several types of face masks on speech recognition in different levels of multi-talker babble noise. We also examined the effects of listeners’ self-reported frequency of encounters with masked speech and the effects of the implementation of public mask mandates on speech recognition. Overall, we found that listeners’ performance in the current experiment (with data collected in 2021) was similar to that of listeners in Toscano and Toscano (with data collected in 2020) and that performance did not differ based on mask experience. These findings suggest that listeners may have already adapted to masked speech by the time data were collected in 2020, are unable to adapt to masked speech, require additional context to be able to adapt, or that talkers also changed their productions over time. Implications for theories of perceptual learning in speech are discussed.

19.
Front Psychol ; 13: 879123, 2022.
Article in English | MEDLINE | ID: covidwho-1865466

ABSTRACT

Infants have been shown to rely on both auditory and visual cues when processing speech. We investigated the impact of COVID-related changes, in particular of face masks, on early word segmentation abilities. Following up on our previous study, which demonstrated that by 4 months infants already segmented targets presented auditorily at utterance-edge position, and using the same visual familiarization paradigm, 7- to 9-month-old infants performed an auditory and an audiovisual word segmentation experiment in two conditions: without and with an FFP2 face mask. Analysis of acoustic and visual cues showed changes in face-masked speech affecting the amount, weight, and location of cues. Utterance-edge position displayed more salient cues than utterance-medial position, but the cues were attenuated in face-masked speech. Results revealed no evidence for segmentation, not even at edge position, regardless of mask condition and auditory or visual speech presentation. However, in the audiovisual experiment, infants attended more to the screen during the test trials when familiarized with no-mask speech. The infants also attended more to the mouth and less to the eyes in the no-mask than in the mask condition. In addition, evidence for an advantage of the utterance-edge position in emerging segmentation abilities was found. Thus, audiovisual information provided some support to developing word segmentation. We compared the segmentation ability of 7- to 9-month-olds observed in the Butler and Frota pre-COVID study with the current auditory no-mask data. Mean looking time for edge targets was significantly higher than for unfamiliar items in the pre-COVID study only. Measures of cognitive and language development obtained with the CSBS scales showed that the infants in the current study scored significantly lower than the same-age infants from the CSBS (pre-COVID) normative data. Our results suggest an overall effect of the pandemic on early segmentation abilities and language development, calling for longitudinal studies to determine how development proceeds.

20.
Cogn Res Princ Implic ; 7(1): 46, 2022 05 26.
Article in English | MEDLINE | ID: covidwho-1865316

ABSTRACT

Over the past two years, face masks have been a critical tool for preventing the spread of COVID-19. While previous studies have examined the effects of masks on speech recognition, much of this work was conducted early in the pandemic. Given that human listeners are able to adapt to a wide variety of novel contexts in speech perception, an open question concerns the extent to which listeners have adapted to masked speech during the pandemic. In order to evaluate this, we replicated Toscano and Toscano (PLOS ONE 16(2):e0246842, 2021), looking at the effects of several types of face masks on speech recognition in different levels of multi-talker babble noise. We also examined the effects of listeners' self-reported frequency of encounters with masked speech and the effects of the implementation of public mask mandates on speech recognition. Overall, we found that listeners' performance in the current experiment (with data collected in 2021) was similar to that of listeners in Toscano and Toscano (with data collected in 2020) and that performance did not differ based on mask experience. These findings suggest that listeners may have already adapted to masked speech by the time data were collected in 2020, are unable to adapt to masked speech, require additional context to be able to adapt, or that talkers also changed their productions over time. Implications for theories of perceptual learning in speech are discussed.


Subject(s)
COVID-19, Speech Perception, Humans, Masks, Noise, Speech